
Classification variable


A Max-relevance-min-divergence Criterion for Data Discretization with Applications on Naive Bayes

Wang, Shihe, Ren, Jianfeng, Bai, Ruibin, Yao, Yuan, Jiang, Xudong

arXiv.org Artificial Intelligence

In many classification models, data is discretized to better estimate its distribution. Existing discretization methods often aim to maximize the discriminant power of the discretized data, overlooking the fact that the primary goal of discretization in classification is to improve generalization performance. As a result, the data tend to be over-split into many small bins, since undiscretized data retain the maximal discriminant information. We therefore propose a Max-Dependency-Min-Divergence (MDmD) criterion that maximizes both the discriminant information and the generalization ability of the discretized data. More specifically, the Max-Dependency criterion maximizes the statistical dependency between the discretized data and the classification variable, while the Min-Divergence criterion explicitly minimizes the JS-divergence between the training data and the validation data under a given discretization scheme. The MDmD criterion is technically appealing, but it is difficult to reliably estimate the high-order joint distributions of attributes and the classification variable. We hence propose a more practical solution, the Max-Relevance-Min-Divergence (MRmD) discretization scheme, in which each attribute is discretized separately while simultaneously maximizing the discriminant information and the generalization ability of the discretized data. The proposed MRmD is compared with state-of-the-art discretization algorithms under the naive Bayes classification framework on 45 machine-learning benchmark datasets, and it significantly outperforms all compared methods on most of them.
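As a rough illustration of the per-attribute trade-off the abstract describes, the sketch below scores a candidate binning by the mutual information between the discretized attribute and the class labels, minus the JS-divergence between the binned training and validation marginals. The function name, the trade-off weight `lam`, and the estimators are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of an MRmD-style score for one attribute:
# relevance (mutual information with the class) minus divergence
# (JS-divergence between train and validation bin distributions).
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.metrics import mutual_info_score

def mrmd_score(train_vals, train_labels, valid_vals, bin_edges, lam=1.0):
    """Score a candidate discretization given its bin edges."""
    train_bins = np.digitize(train_vals, bin_edges)
    valid_bins = np.digitize(valid_vals, bin_edges)
    # Relevance: I(binned attribute; class), estimated on training data (nats).
    relevance = mutual_info_score(train_labels, train_bins)
    # Divergence: JS-divergence between the binned train/validation marginals.
    n_bins = len(bin_edges) + 1
    p = np.bincount(train_bins, minlength=n_bins) / len(train_bins)
    q = np.bincount(valid_bins, minlength=n_bins) / len(valid_bins)
    divergence = jensenshannon(p, q, base=np.e) ** 2  # squared JS distance
    return relevance - lam * divergence
```

A search over candidate bin edges would then keep the binning with the highest score; a larger `lam` penalizes train/validation mismatch more heavily and so favors fewer, better-populated bins.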


Applications of Naive Bayes, Part 1 (Artificial Intelligence)

#artificialintelligence

Abstract: In many classification models, data is discretized to better estimate its distribution. Existing discretization methods often aim to maximize the discriminant power of the discretized data, overlooking the fact that the primary goal of discretization in classification is to improve generalization performance. As a result, the data tend to be over-split into many small bins, since undiscretized data retain the maximal discriminant information. We therefore propose a Max-Dependency-Min-Divergence (MDmD) criterion that maximizes both the discriminant information and the generalization ability of the discretized data. More specifically, the Max-Dependency criterion maximizes the statistical dependency between the discretized data and the classification variable, while the Min-Divergence criterion explicitly minimizes the JS-divergence between the training data and the validation data under a given discretization scheme.
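For context on how a discretizer feeds naive Bayes in practice, here is a generic end-to-end sketch using scikit-learn's equal-frequency binning as a stand-in discretizer. It illustrates the pipeline only and is not the paper's MRmD scheme.

```python
# Generic illustration: discretize continuous features, then fit naive Bayes.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import KBinsDiscretizer

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Equal-frequency binning; edges are learned on the training data only.
disc = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile")
X_tr_d = disc.fit_transform(X_tr)
X_te_d = disc.transform(X_te)

clf = CategoricalNB().fit(X_tr_d, y_tr)
print("test accuracy:", clf.score(X_te_d, y_te))
```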


How To Predict Multiple Variables With One Model? And Why!

#artificialintelligence

When we start working with TensorFlow, we usually use the Sequential API to create models with the Keras library. With sequential models we can solve many problems across deep learning, whether the task is image recognition or classification, natural language processing, or series forecasting; they are powerful enough for the large majority of problems. But there are times when we need to go a little further with Keras and TensorFlow. For those cases we can use the Functional API for model creation, which opens up a much wider world of possibilities that we do not have with sequential models.
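As a minimal sketch of where the Functional API pays off, the snippet below builds one model that predicts two variables at once: a regression head and a classification head sharing a common trunk. The input size, layer widths, and head names are illustrative assumptions, not the article's code.

```python
# Keras Functional API: one model, two outputs (something Sequential cannot do).
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = tf.keras.Input(shape=(16,), name="features")
x = layers.Dense(32, activation="relu")(inputs)   # shared trunk

price = layers.Dense(1, name="price")(x)          # regression output
category = layers.Dense(4, activation="softmax", name="category")(x)  # classifier

model = Model(inputs=inputs, outputs=[price, category])
model.compile(
    optimizer="adam",
    # One loss per output head, keyed by the layer names above.
    loss={"price": "mse", "category": "sparse_categorical_crossentropy"},
)
model.summary()
```

Training then takes one target per head, e.g. `model.fit(X, {"price": y_price, "category": y_class})`, so a single forward pass predicts both variables.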